7 research outputs found

    A Compositionality Machine Realized by a Hierarchic Architecture of Synfire Chains

    The composition of complex behavior is thought to rely on the concurrent and sequential activation of simpler action components, or primitives. Systems of synfire chains have previously been proposed to account for either the simultaneous or the sequential aspects of compositionality; however, the compatibility of the two aspects has so far not been addressed. Moreover, the simultaneous activation of primitives has until now only been investigated in the context of reactive computations, i.e., the perception of stimuli. In this study we demonstrate how a hierarchical organization of synfire chains is capable of generating both aspects of compositionality for proactive computations such as the generation of complex and ongoing action. To this end, we develop a network model consisting of two layers of synfire chains. Using simple drawing strokes as a visualization of abstract primitives, we map the feed-forward activity of the upper-level synfire chains to motion in two-dimensional space. Our model is capable of producing drawing strokes that are combinations of primitive strokes by binding together the corresponding chains. Moreover, when the lower layer of the network is constructed in a closed-loop fashion, drawing strokes are generated sequentially. The generated pattern can be random or deterministic, depending on the connection pattern between the lower-level chains. We propose quantitative measures for simultaneity and sequentiality, revealing a wide parameter range in which both aspects are fulfilled. Finally, we investigate the spiking activity of our model to propose candidate signatures of synfire chain computation in measurements of neural activity during action execution.
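
    How bound chains could be read out as combined strokes can be illustrated with a minimal sketch (the primitive directions and all names below are illustrative assumptions, not the authors' implementation): each active upper-level chain contributes a fixed primitive velocity, and the superposition of concurrently active chains traces the combined drawing stroke.

    import numpy as np

    # Hypothetical primitives: each upper-level chain maps to a fixed 2D velocity.
    PRIMITIVES = {
        "chain_right": np.array([1.0, 0.0]),
        "chain_up":    np.array([0.0, 1.0]),
    }

    def trace_stroke(activation, dt=1.0):
        """Integrate the summed velocities of the concurrently active chains
        into a 2D pen trajectory; activation maps chain name -> boolean array."""
        steps = len(next(iter(activation.values())))
        pos = np.zeros(2)
        trajectory = [pos.copy()]
        for t in range(steps):
            velocity = sum(PRIMITIVES[name] * float(active[t])
                           for name, active in activation.items())
            pos = pos + velocity * dt
            trajectory.append(pos.copy())
        return np.array(trajectory)

    # Binding both chains for the first half of the interval yields a stroke that
    # starts as the diagonal combination of the two primitives.
    act = {"chain_right": np.ones(10, dtype=bool),
           "chain_up":    np.r_[np.ones(5, dtype=bool), np.zeros(5, dtype=bool)]}
    print(trace_stroke(act)[-1])    # final pen position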

    Meeting the Memory Challenges of Brain-Scale Network Simulation

    The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we show that as network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of the key components contributing to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales, and develop general strategies for reducing the memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and our findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project.
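
    As a rough illustration of such a linear memory model (all component names and coefficients below are invented for illustration, not fitted values from the paper), the per-node memory can be decomposed into a fixed base, a term that scales with the total network size, and terms that scale with the neurons and synapses stored locally on the node.

    # Illustrative linear model of per-node memory consumption,
    #   M(N, P) = m_base + N * m_global + (N / P) * m_neuron + (N * K / P) * m_synapse,
    # where N is the total number of neurons, P the number of processes, and K the
    # average number of synapses per neuron. All coefficients are invented here;
    # in practice they are obtained by fitting measurements of the simulator.

    def memory_per_node(n_neurons, n_procs, syn_per_neuron=10_000,
                        m_base=200e6, m_global=16.0,
                        m_neuron=1_500.0, m_synapse=48.0):
        """Predicted memory footprint (bytes) on one compute node."""
        return (m_base
                + n_neurons * m_global                       # structures present on every node
                + (n_neurons / n_procs) * m_neuron           # node-local neuron objects
                + (n_neurons * syn_per_neuron / n_procs) * m_synapse)  # node-local synapses

    # Example: per-node footprint of a 10^8-neuron network on 10,000 processes.
    print(memory_per_node(10**8, 10_000) / 1e9, "GB")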

    Increasing quality and managing complexity in neuroinformatics software development with continuous integration

    High-quality neuroscience research requires accurate, reliable and well-maintained neuroinformatics applications. As software projects become larger, offering more functionality and developing a denser web of interdependence between their component parts, we need more sophisticated methods to manage their complexity. If complexity is allowed to get out of hand, either the quality of the software or the speed of development suffers, and in many cases both do. To address this issue, here we develop a scalable, low-cost and open source solution for continuous integration (CI), a technique which ensures the quality of changes to the code base during the development procedure, rather than relying on a pre-release integration phase. We demonstrate that a CI-based workflow, due to rapid feedback about code integration problems and tracking of code health measures, enabled substantial increases in productivity for a major neuroinformatics project and additional benefits for three further projects. Beyond the scope of the current study, we identify multiple areas in which CI can be employed to further increase the quality of neuroinformatics projects by improving development practices and incorporating appropriate development tools. Finally, we discuss what measures can be taken to lower the barrier for developers of neuroinformatics applications to adopt this useful technique.
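
    The feedback loop that CI automates can be sketched in a few lines (the commands and names below are placeholders, not the workflow or tooling used in the study): every pushed revision is checked out, built, and tested immediately, and a failure is reported back before it can accumulate with other changes.

    import subprocess

    # Toy continuous-integration step: build and test a freshly pushed revision and
    # report the outcome. A real CI server triggers this automatically on every
    # commit; the commands below are placeholders for a project's own tooling.

    def run(cmd):
        """Run a shell command and report whether it succeeded."""
        return subprocess.run(cmd, shell=True).returncode == 0

    def ci_pipeline(revision):
        steps = [
            ("checkout", f"git checkout {revision}"),
            ("build",    "make -j4"),
            ("test",     "make test"),
        ]
        for name, cmd in steps:
            if not run(cmd):
                print(f"CI FAILED at step '{name}' for revision {revision}")
                return False
        print(f"CI passed for revision {revision}")
        return True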

    Modeling the calcium spike as a threshold triggered fixed waveform for synchronous inputs in the fluctuation regime

    Modeling the layer 5 pyramidal neuron as a system of three connected isopotential compartments, the soma and a proximal and a distal compartment, with calcium spike dynamics in the distal compartment following first-order kinetics, we are able to reproduce in-vitro experimental results which demonstrate the involvement of calcium spikes in action potential generation. To explore how calcium spikes affect the neuronal output in-vivo, we emulate in-vivo-like conditions by embedding the neuron model in a regime of low background fluctuations with occasional large synchronous inputs. In such a regime, a full calcium spike is only triggered by the synchronous events in a threshold-like manner and has a stereotypical waveform. Hence, in such a regime, we are able to replace the calcium dynamics with a simpler threshold-triggered current of fixed waveform, which is amenable to analytical treatment. We obtain analytically the mean somatic membrane potential excursion due to a calcium spike being triggered while in the fluctuating regime. Our analytical form, which accounts for the covariance between conductances and the membrane potential, shows better agreement with simulation results than a naive first-order approximation.
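
    The simplification can be illustrated with a minimal sketch (all parameters and the waveform shape below are illustrative assumptions, not the published model): the membrane fluctuates weakly around rest, and whenever an occasional synchronous input drives it above a threshold, a stereotyped calcium-spike current of fixed shape is pasted into the input instead of solving the full calcium dynamics.

    import numpy as np

    rng = np.random.default_rng(0)

    dt, T = 0.1, 500.0                       # ms
    n = int(T / dt)
    tau, R, v_rest = 10.0, 1.0, -65.0        # illustrative passive membrane parameters
    ca_thresh = -55.0                        # threshold for triggering a calcium spike

    # Stereotyped calcium-spike current: a fixed ~50 ms waveform, triggered as a whole.
    t_wave = np.arange(0.0, 50.0, dt)
    ca_waveform = 2.0 * np.exp(-t_wave / 20.0) * (1.0 - np.exp(-t_wave / 2.0))

    v = np.full(n, v_rest)
    i_ca = np.zeros(n + len(ca_waveform))
    for k in range(1, n):
        background = 0.5 * rng.normal()                     # low background fluctuations
        synchronous = 15.0 if (k % 2000) < 200 else 0.0     # occasional large synchronous input
        v[k] = v[k - 1] + dt / tau * (v_rest - v[k - 1] + R * (background + synchronous + i_ca[k]))
        # Threshold-like triggering: paste the fixed waveform instead of solving calcium dynamics.
        if v[k] > ca_thresh and i_ca[k] == 0.0:
            i_ca[k:k + len(ca_waveform)] += ca_waveform

    print("calcium spikes triggered:", int((np.diff((i_ca > 0).astype(int)) == 1).sum()))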

    Automatic generation of connectivity for large-scale neuronal network models through structural plasticity

    With the emergence of new high-performance computing technology in the last decade, the simulation of large-scale neural networks able to reproduce the behaviour and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modelling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modelled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, we present here an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large-scale networks. We show and discuss the results of simulations on simple two-population networks and on more complex models of the cortical microcircuit involving 8 populations and 4 layers, using the new framework.
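
    A schematic sketch of such a homeostatic wiring rule (the rule, constants, and the toy activity model below are illustrative assumptions, not the NEST implementation): neurons whose activity is below the target grow free pre- and post-synaptic elements, neurons above it retract them, free elements are paired at random into new synapses, and strongly negative element counts delete existing connections.

    import numpy as np

    rng = np.random.default_rng(1)

    n_neurons = 50
    target = 5.0          # desired mean activity per neuron (illustrative units)
    nu = 0.05             # growth rate of synaptic elements per update

    rates = rng.uniform(0.0, 10.0, n_neurons)    # stand-in for measured activity
    axons = np.zeros(n_neurons)                  # free pre-synaptic elements
    dendrites = np.zeros(n_neurons)              # free post-synaptic elements
    synapses = []                                # list of (pre, post) connections

    for step in range(500):
        # Homeostatic rule: below-target neurons grow elements, above-target neurons retract them.
        delta = nu * (target - rates)
        axons += delta
        dendrites += delta

        # Strongly negative post-synaptic counts delete an existing incoming synapse
        # (the corresponding pre-synaptic bookkeeping is omitted for brevity).
        for i in np.where(dendrites < -1.0)[0]:
            incoming = [s for s in synapses if s[1] == i]
            if incoming:
                synapses.remove(incoming[0])
                dendrites[i] += 1.0

        # Free elements are paired at random to create new synapses.
        while axons.max() >= 1.0 and dendrites.max() >= 1.0:
            pre = rng.choice(np.where(axons >= 1.0)[0])
            post = rng.choice(np.where(dendrites >= 1.0)[0])
            synapses.append((pre, post))
            axons[pre] -= 1.0
            dendrites[post] -= 1.0

        # Toy activity model: activity grows with in-degree (illustration only).
        indegree = np.bincount(np.array([p for _, p in synapses], dtype=int),
                               minlength=n_neurons)
        rates = 0.5 * indegree + 0.2 * rng.normal(size=n_neurons)

    print(len(synapses), "synapses; mean activity", round(rates.mean(), 2))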

    Liquid computing on and off the edge of chaos with a striatal microcircuit

    In reinforcement learning theories of the basal ganglia, there is a need for the expected rewards corresponding to relevant environmental states to be maintained and modified during the learning process. However, the representation of these states that allows them to be associated with reward expectations remains unclear. Previous studies have tended to rely on pre-defined partitioning of states encoded by disjunct neuronal groups or sparse topological drives. A more likely scenario is that striatal neurons are involved in the encoding of multiple different states through their spike patterns, and that an appropriate partitioning of an environment is learned on the basis of task constraints, thus minimizing the number of states involved in solving a particular task. Here we show that striatal activity is sufficient to implement a liquid state, an important prerequisite for such a computation, whereby transient patterns of striatal activity are mapped onto the relevant states. We develop a simple small-scale model of the striatum which can reproduce key features of the experimentally observed activity of the major cell types of the striatum. We then use the activity of this network as input for the supervised training of four simple linear readouts to learn three different functions on a plane, where the network is stimulated with the spike-coded position of the agent. We discover that the network configuration that best reproduces striatal activity statistics lies on the edge of chaos and has good performance on all three tasks, but that in general, the edge of chaos is a poor predictor of network performance.
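
    The readout stage can be illustrated with a compact sketch (the random surrogate for the striatal network, the filter constants, and the one-dimensional position signal below are illustrative assumptions): spike trains are low-pass filtered into a liquid state vector, and a linear readout is fitted by least squares to map those transient states onto a target function of the input.

    import numpy as np

    rng = np.random.default_rng(2)

    n_neurons, n_steps, dt, tau = 200, 2000, 1.0, 20.0   # ms

    # Stand-in for striatal activity driven by a slowly varying input
    # (a one-dimensional proxy for the spike-coded position of the agent).
    position = np.sin(np.arange(n_steps) * 2 * np.pi / 500.0)
    gains = rng.uniform(-1.0, 1.0, n_neurons)
    spikes = rng.random((n_steps, n_neurons)) < 0.02 * (1.0 + 0.5 * np.outer(position, gains))

    # Liquid state: exponential low-pass filter of each neuron's spike train.
    states = np.zeros((n_steps, n_neurons))
    for t in range(1, n_steps):
        states[t] = states[t - 1] * np.exp(-dt / tau) + spikes[t]

    # Supervised training of a linear readout by least squares (first half = training,
    # second half = test); the target here is simply the input position itself.
    X = np.hstack([states, np.ones((n_steps, 1))])
    half = n_steps // 2
    w, *_ = np.linalg.lstsq(X[:half], position[:half], rcond=None)
    prediction = X[half:] @ w
    print("test correlation:", np.corrcoef(prediction, position[half:])[0, 1].round(3))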

    Supercomputers ready for use as discovery machines for neuroscience

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst-case scenario of random connectivity; for larger networks of the brain, its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum-filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.
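
    The hybrid parallelization can be sketched as follows (the 8-thread figure comes from the abstract, but the round-robin rule and all other numbers are illustrative of a common distribution scheme rather than a description of NEST internals): neurons are assigned round-robin to virtual processes, each MPI process contributes several threads, and every rank stores only its own slice of the network.

    # Illustrative round-robin distribution of neurons over virtual processes in a
    # hybrid MPI + threading setup (8 threads per MPI process, as mentioned above).
    # The scheme and the numbers are for illustration only.

    threads_per_process = 8
    n_processes = 12                     # MPI ranks (illustrative; K uses far more)
    n_vp = n_processes * threads_per_process

    def owner(neuron_id):
        """Return (mpi_rank, thread) owning a neuron under round-robin assignment."""
        vp = neuron_id % n_vp
        return vp // threads_per_process, vp % threads_per_process

    # Each rank instantiates only the neurons (and their incoming synapses) it owns.
    n_neurons = 10**6
    my_rank = 3
    local = [gid for gid in range(n_neurons) if owner(gid)[0] == my_rank]
    print(len(local), "of", n_neurons, "neurons live on rank", my_rank)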